
    Cosmology of fermionic dark matter

    We explore a model for a fermionic dark matter particle family which decouples from the rest of the particles while at least all standard model particles are still in equilibrium. We calculate the ranges of mass and chemical potential that are compatible with big bang nucleosynthesis (BBN) calculations and WMAP data for a flat universe with dark energy. Furthermore, we estimate the free streaming length for fermions and antifermions to allow comparison with large scale structure (LSS) data. We find that for dark matter decoupling while all standard model particles are present, even the least restrictive combination of BBN calculations and WMAP results constrains the initial dark matter chemical potential to at most 6.3 times the dark matter temperature. In this case the resulting mass range is at most 1.8 eV < m < 53 eV, where the upper bound scales linearly with the effective degrees of freedom at decoupling. From LSS we find that, as in ordinary warm dark matter models, the particle mass has to exceed approximately 500 eV (meaning the effective degrees of freedom at decoupling have to exceed 1000) to be compatible with observations of the Lyman-alpha forest at high redshift, yet the ratio of dark matter chemical potential to temperature can still exceed unity. Comment: 14 pages, 13 figures; submitted to Phys. Rev. D; minor changes after referee report: references added, several minor extensions (mostly to the introduction), and the conclusion extended with an additional summary plot to clarify the result.
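    For orientation, a hedged sketch of the textbook relations underlying such constraints (standard relativistic equilibrium results in the conventions of Kolb and Turner, not reproduced from the paper itself). The net number density of a relativistic fermion species with internal degrees of freedom g, temperature T and chemical potential \mu is

        n - \bar{n} = \frac{g\,T^3}{6\pi^2}\left[\pi^2\,\frac{\mu}{T} + \left(\frac{\mu}{T}\right)^3\right],

    and entropy conservation fixes the species' temperature today relative to the photon temperature,

        \frac{T_{\rm DM}}{T_\gamma} = \left(\frac{g_{*s}(T_\gamma)}{g_{*s}(T_{\rm dec})}\right)^{1/3}.

    The present-day number density therefore scales as T_{\rm DM}^3 \propto 1/g_{*s}(T_{\rm dec}), so holding the relic density \Omega_{\rm DM} \propto m\,n fixed makes the allowed mass grow linearly with the effective degrees of freedom at decoupling, which is the scaling quoted above.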

    Type Targeted Testing

    We present a new technique called type targeted testing, which translates precise refinement types into comprehensive test suites. The key insight behind our approach is that, through the lens of SMT solvers, refinement types can also be viewed as a high-level, declarative test generation technique, wherein types are converted to SMT queries whose models can be decoded into concrete program inputs. Our approach enables the systematic and exhaustive testing of implementations from high-level declarative specifications and, furthermore, provides a gradual path from testing to full verification. We have implemented our approach as a Haskell testing tool called TARGET, and present an evaluation that shows how TARGET can be used to test a wide variety of properties and how it compares against state-of-the-art testing approaches.
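    As a rough illustration of the core idea (a sketch only; TARGET itself consumes Haskell refinement types, and the contract below is invented for the example), a refinement-type-style specification can be posed as an SMT query whose models are decoded into concrete test inputs. The sketch uses the z3 Python bindings:

        # Type targeted testing in miniature: a refinement acts as an SMT
        # query; each model is decoded into a concrete test input.
        from z3 import Int, Solver, And, Or, sat

        x, y = Int('x'), Int('y')
        # Refinement-style precondition: {x : Int | 0 < x}, {y : Int | x < y < 2x}
        pre = And(x > 0, x < y, y < 2 * x)

        def inputs(constraint, limit):
            """Enumerate up to `limit` distinct models as (x, y) pairs."""
            s = Solver()
            s.add(constraint)
            while limit > 0 and s.check() == sat:
                m = s.model()
                yield m[x].as_long(), m[y].as_long()
                s.add(Or(x != m[x], y != m[y]))  # block this model
                limit -= 1

        def gap(a, b):
            # Hypothetical function under test; its refined result type
            # promises {v : Int | 0 < v < a}.
            return b - a

        for a, b in inputs(pre, 10):
            v = gap(a, b)
            assert 0 < v < a, f"postcondition violated on {(a, b)}"

    Decoding models into inputs this way yields exactly the systematic, specification-driven tests the abstract describes, without hand-written generators.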

    Intercomparison of Hantzsch and fiber-laser-induced-fluorescence formaldehyde measurements

    Two gas-phase formaldehyde (HCHO) measurement techniques, a modified commercial wet-chemical instrument based on Hantzsch fluorimetry and a custom-built instrument based on fiber laser-induced fluorescence (FILIF), were deployed at the atmospheric simulation chamber SAPHIR (Simulation of Atmospheric PHotochemistry In a large Reaction Chamber) to compare the instruments' performance under a range of conditions. Thermolysis of para-HCHO and ozonolysis of 1-butene were used as HCHO sources, allowing theoretical HCHO mixing ratios to be calculated. The calculated mixing ratios are compared to the measurements, and the two measurements are compared to each other. Experiments were repeated under dry and humid conditions (RH 60%) to investigate a possible water artifact in the FILIF measurements. The ozonolysis of 1-butene also allowed investigation of an ozone artifact seen in some Hantzsch measurements in previous intercomparisons. Results show that under all conditions the two techniques are well correlated (R2 ≥ 0.997), and linear regression statistics show that the measurements agree within the stated uncertainties (15% FILIF + 5% Hantzsch). No water or ozone artifacts are identified. While a slight curvature is observed in some Hantzsch vs. FILIF regressions, the potential variation in instrument sensitivity cannot be attributed to a single instrument at this time. Measurements at low concentrations highlight the need for a secondary method for testing the purity of the air used in instrument zeroing, and for further FILIF White cell outgassing experiments.
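    For readers unfamiliar with how such agreement is quantified, a minimal sketch with synthetic numbers standing in for the SAPHIR data (the slope and noise levels below are invented):

        # Intercomparison sketch: correlation and regression between two
        # co-located HCHO time series; synthetic data, not SAPHIR data.
        import numpy as np

        rng = np.random.default_rng(0)
        true_hcho = np.linspace(0.2, 8.0, 200)                 # ppbv ramp
        filif = true_hcho + rng.normal(0, 0.05, 200)
        hantzsch = 0.98 * true_hcho + rng.normal(0, 0.08, 200)

        slope, intercept = np.polyfit(filif, hantzsch, 1)
        r2 = np.corrcoef(filif, hantzsch)[0, 1] ** 2
        print(f"Hantzsch = {slope:.3f} * FILIF {intercept:+.3f} ppbv, R^2 = {r2:.4f}")

    Note that ordinary least squares treats the FILIF values as error-free; instrument intercomparisons often prefer an errors-in-variables fit (e.g. orthogonal distance regression) when both measurements carry uncertainty.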

    National or population level interventions addressing the social determinants of mental health - an umbrella review

    Background: The social circumstances in which people live and work impact the population's mental health. We aimed to synthesise evidence identifying effective interventions and policies that influence the social determinants of mental health at the national or scaled population level. We searched five databases (Cochrane Library, Global Health, MEDLINE, EMBASE and PsycINFO) between Jan 1st 2000 and July 23rd 2019 to identify systematic reviews of population-level interventions or policies that addressed a recognised social determinant of mental health and collected mental health outcomes. There were no restrictions on country, sub-population or age. A narrative overview of results is provided. Quality assessment was conducted using the Assessment of Multiple Systematic Reviews (AMSTAR 2) tool. This study was registered on PROSPERO (CRD42019140198). Results: We identified 20 reviews for inclusion. Most reviews were of low or critically low quality. Primary studies were mostly observational and from higher-income settings. Higher-quality evidence indicates that more generous welfare benefits may reduce socioeconomic inequalities in mental health outcomes. Lower-quality evidence suggests that unemployment insurance, warm housing interventions, neighbourhood renewal, paid parental leave, gender equality policies, community-based parenting programmes, and less restrictive migration policies are associated with improved mental health outcomes. Low-quality evidence suggests that restricting access to lethal means and multi-component suicide prevention programmes are associated with reduced suicide risk. Conclusion: This umbrella review has identified a small and overall low-quality evidence base for population-level interventions addressing the social determinants of mental health. There are significant gaps in the evidence base for key policy areas, which limit the ability of national policymakers to understand how to effectively improve population mental health.

    Compositional Solution Space Quantification for Probabilistic Software Analysis

    Probabilistic software analysis aims at quantifying how likely a target event is to occur during program execution. Current approaches rely on symbolic execution to identify the conditions under which the target event is reached, and try to quantify the fraction of the input domain satisfying these conditions. Precise quantification is usually limited to linear constraints, while in general only approximate solutions can be provided through statistical approaches. However, statistical approaches may fail to converge to an acceptable accuracy within a reasonable time. We present a compositional statistical approach for the efficient quantification of solution spaces for arbitrarily complex constraints over bounded floating-point domains. The approach leverages interval constraint propagation to improve the accuracy of the estimation by focusing the sampling on the regions of the input domain containing the sought solutions. Preliminary experiments show significant improvement over previous approaches in both accuracy and analysis time.
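    A minimal sketch of the estimation strategy described above, with a toy nonlinear constraint in place of path conditions from symbolic execution and hand-rolled interval bounds in place of a constraint-propagation library:

        # Prune boxes with sound interval bounds; count fully feasible
        # boxes exactly and sample only the undecided ones.
        import itertools, random

        def bounds(xlo, xhi, ylo, yhi):
            """Sound interval enclosure of f(x, y) = x*x + y over a box."""
            sq = sorted((xlo * xlo, xhi * xhi))
            lo = (0.0 if xlo <= 0.0 <= xhi else sq[0]) + ylo
            return lo, sq[1] + yhi

        def estimate(c=0.5, depth=4, per_box=200, seed=0):
            """Estimate P[f(x, y) <= c] for (x, y) uniform on [-1, 1]^2."""
            rng = random.Random(seed)
            n = 2 ** depth
            w = 2.0 / n                        # box side length
            frac, box_frac = 0.0, w * w / 4.0  # share of domain per box
            for i, j in itertools.product(range(n), repeat=2):
                xlo, ylo = -1 + i * w, -1 + j * w
                lo, hi = bounds(xlo, xlo + w, ylo, ylo + w)
                if hi <= c:                    # certainly feasible
                    frac += box_frac
                elif lo <= c:                  # undecided: sample locally
                    hits = sum(rng.uniform(xlo, xlo + w) ** 2 +
                               rng.uniform(ylo, ylo + w) <= c
                               for _ in range(per_box))
                    frac += box_frac * hits / per_box
                # else: certainly infeasible, contributes nothing
            return frac

        print(estimate())

    Concentrating the samples in the undecided boxes is what buys the accuracy: the certainly-feasible and certainly-infeasible regions contribute no sampling variance at all.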

    IC-Cut: A Compositional Search Strategy for Dynamic Test Generation

    We present IC-Cut, short for "Interface-Complexity-based Cut", a new compositional search strategy for systematically testing large programs. IC-Cut dynamically detects function interfaces that are simple enough to be cost-effective for summarization. It then hierarchically decomposes the program into units defined by such functions and their sub-functions in the call graph. These units are tested independently, their test results are recorded as low-complexity function summaries, and the summaries are reused when testing higher-level functions in the call graph, thus limiting overall path explosion. When the decomposed units are tested exhaustively, they constitute verified components of the program. IC-Cut runs dynamically and on the fly during the search, typically refining cuts as the search advances. We have implemented this algorithm as a new search strategy in the whitebox fuzzer SAGE, and present detailed experimental results obtained when fuzzing the ANI Windows image parser. Our results show that IC-Cut alleviates path explosion while preserving or even increasing code coverage and bug finding, compared to the generational-search strategy currently used in SAGE.
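    A toy sketch of the summarization idea (illustrative only; SAGE performs dynamic symbolic execution on x86 binaries, and the functions below are invented): a callee with a small, simple interface is tested exhaustively once, its input-output behaviour is recorded as a summary, and callers are tested on top of the summary rather than re-exploring the callee's paths.

        # Interface-complexity-based summarization in miniature.
        INTERFACE = range(-8, 8)   # small interface, cheap to cover fully

        summaries = {}

        def summarize(fn):
            """Test fn exhaustively over the interface and memoize it."""
            summaries[fn.__name__] = {x: fn(x) for x in INTERFACE}
            return lambda x: summaries[fn.__name__][x]

        @summarize
        def clamp(x):              # callee: low-complexity branching
            return 0 if x < 0 else 7 if x > 7 else x

        def parser(x):             # caller: reuses clamp's summary, so
            return [0] * clamp(x)  # clamp's branches are not re-explored

        for x in INTERFACE:        # test the caller against the summary
            assert len(parser(x)) == summaries["clamp"][x]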

Peak Oil: The Challenge of Local Oil Dependence, Using Münster as an Example

    The oil age is drawing to a close, and shale oil, agrofuels, and techniques such as fracking will not change that in the long run. A group of students at the University of Münster felt that scientific, political, and societal engagement with this challenge was moving too slowly. For this reason, in 2012 they independently initiated an interdisciplinary peak oil seminar and supported fellow students in developing and pursuing their own research questions in socially relevant sectors: energy supply, transport, the local economy, food, health, and private households. The result is a report that uses the example of Münster to underline the urgency and topicality of increasingly scarce resources, that emphasises the importance of local, forward-looking, voluntary and creative reduction of oil dependence, and that, not least, makes a case for formats of transformative and open research and action.

    Collaborative Verification and Testing with Explicit Assumptions

    Many mainstream static code checkers make a number of compromises to improve automation, performance, and accuracy. These compromises include not checking certain program properties as well as making implicit, unsound assumptions. Consequently, the results of such static checkers do not provide definite guarantees about program correctness, and it is unclear which properties remain to be tested. We propose a technique for collaborative verification and testing that makes the compromises of static checkers explicit so that they can be compensated for by complementary checkers or by testing. Our experiments suggest that our technique finds more errors and proves more properties than static checking alone, testing alone, and combinations that do not explicitly document the compromises made by static checkers. Our technique is also useful for obtaining small test suites for partially verified programs.
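    As a hedged sketch of the idea (a toy, not the tool from the paper): the point is that the static phase writes down the unsound assumptions it relied on, and the testing phase consumes exactly that list rather than guessing what remains unchecked.

        # A static checker's compromises, made explicit and machine-readable.
        def abs_diff(a, b):
            return abs(a - b)

        static_report = {
            "proved":  ["result >= 0"],
            "assumed": ["arguments fit in 32-bit ints"],  # implicit before
        }

        def residual_tests(report):
            """Emit tests only for properties the checker assumed away."""
            if "arguments fit in 32-bit ints" in report["assumed"]:
                big = 2**31 - 1   # boundary a C implementation would face
                yield "32-bit boundary", abs_diff(big, -big) == 2 * big

        for name, ok in residual_tests(static_report):
            print(name, "passed" if ok else "FAILED")

    The complementary tool, whether a test generator or another checker, thus targets precisely the documented gaps, which is why the combination proves more and finds more than either phase alone.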